Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection

Authors

Abstract

Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the Adaptative Perturbation Pattern Method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent perturbations. The proposed method was evaluated in a case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer Perceptron (MLP) and Random Forest (RF) classifiers were created with regular training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of adversarial examples, which can be advantageous for both training and attacks.
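To illustrate the class-adaptive idea described in the abstract, the following is a minimal Python sketch, not the authors' implementation: the helper names (fit_class_patterns, perturb, untargeted_attack) are hypothetical, and A2PM's adaptive pattern sequences are approximated here with simple per-class feature ranges, perturbing a tabular flow in a gray-box loop that only queries the classifier's predictions.

# Hypothetical sketch of class-adaptive, gray-box perturbation for tabular data.
# This is an assumption-based illustration, not the A2PM implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def fit_class_patterns(X, y, scale=0.05):
    # For each class, record per-feature minima/maxima and a small step size,
    # so perturbations stay within value ranges plausible for that class.
    patterns = {}
    for label in np.unique(y):
        Xc = X[y == label]
        lo, hi = Xc.min(axis=0), Xc.max(axis=0)
        patterns[label] = {"low": lo, "high": hi, "step": scale * (hi - lo)}
    return patterns

def perturb(x, pattern):
    # Add bounded noise and clip back into the class's observed range,
    # keeping the perturbed example coherent with its class.
    noise = rng.uniform(-1.0, 1.0, size=x.shape) * pattern["step"]
    return np.clip(x + noise, pattern["low"], pattern["high"])

def untargeted_attack(model, x, y_true, pattern, max_iters=20):
    # Gray-box loop: only the model's predictions are queried; stop as soon
    # as the prediction flips away from the true class.
    x_adv = x.copy()
    for _ in range(max_iters):
        if model.predict(x_adv.reshape(1, -1))[0] != y_true:
            break
        x_adv = perturb(x_adv, pattern)
    return x_adv

# Example usage on synthetic tabular data with a Random Forest classifier.
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
patterns = fit_class_patterns(X, y)
x_adv = untargeted_attack(clf, X[0], int(y[0]), patterns[int(y[0])])

Bounding and clipping each perturbation to the value range observed for its class is only a crude stand-in for the validity and coherence constraints that the paper formalizes.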

Similar Articles

A Hybrid Machine Learning Method for Intrusion Detection

Data security is an important area of concern for every computer system owner. An intrusion detection system is a device or software application that monitors a network or systems for malicious activity or policy violations. Various techniques of artificial intelligence have already been used for intrusion detection. The main challenge in this area is the running speed of the available implemen...

Perturbation Algorithms for Adversarial Online Learning

Adversarial Deep Learning for Robust Detection of Binary Encoded Malware

Malware is constantly adapting in order to avoid detection. Model-based malware detectors, such as SVMs and neural networks, are vulnerable to so-called adversarial examples, which are modest changes to detectable malware that allow the resulting malware to evade detection. Continuous-valued methods that are robust to adversarial examples of images have been developed using saddle-point optimiza...

Robust Adversarial Reinforcement Learning

Deep neural networks coupled with fast simulation and improved computation have led to recent successes in the field of reinforcement learning (RL). However, most current RL-based approaches fail to generalize since: (a) the gap between simulation and the real world is so large that policy-learning approaches fail to transfer; (b) even if policy learning is done in the real world, the data scarcity lea...

Robust learning intrusion detection for attacks on wireless networks

We address the problem of evaluating the robustness of machine-learning-based detectors for deployment in real-life networks. To this end, we employ Genetic Programming for evolving classifiers and Artificial Neural Networks as our machine learning paradigms under three different Denial-of-Service attacks at the Data Link layer (De-authentication, Authentication and Association attacks). We inv...

Journal

Journal: Future Internet

Year: 2022

ISSN: 1999-5903

DOI: https://doi.org/10.3390/fi14040108